Donald Clark Plan B
What is Plan B? Not Plan A!
Friday, November 08, 2024
GenAI synthesises and does not copy – huge case won by OpenAI
This is big news.
In what is seen as a critical test case, SDNY Judge Colleen McMahon has dismissed the idea that training an LLM is copying. The ruling (without prejudice) did not provide judgements on what I'm about to say; it merely stated the arguments and provided the explanatory detail, which I think is sound.
Generative AI ‘synthesises’, it does not copy. This is central. It’s a bit like our brains: we see, hear and read stuff, but memory isn’t copying; it’s a process of synthesis, and recall is reconstructive. If you believe in the computational theory of mind, as I do, this makes sense (many don't).
What is even more interesting is the conclusion that the datasets are so large that no one piece is likely to be plagiarised. That, I think, is the correct conclusion. It would take 170,000 years for us to read the GPT-4 dataset, reading 8 hours a day. Any one piece is quantifiably minuscule.
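A back-of-the-envelope check makes the scale vivid. All figures below are my own illustrative assumptions (corpus size, reading speed), not numbers from the ruling:

```python
# Back-of-the-envelope estimate: years needed to read a large training corpus.
# All figures are illustrative assumptions, not measured values.
corpus_words = 5_000_000_000_000   # assumed corpus size: ~5 trillion words
words_per_minute = 200             # assumed adult reading speed
minutes_per_day = 8 * 60           # reading 8 hours a day

words_per_day = words_per_minute * minutes_per_day   # 96,000 words a day
days_needed = corpus_words / words_per_day
years_needed = days_needed / 365

print(f"~{years_needed:,.0f} years of reading")
```

On these assumptions the answer lands in the low hundreds of thousands of years, the same order of magnitude as the figure above, and any single document is a vanishingly small fraction of the whole.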
As for the idea that regurgitated data has appeared: this problem appears to have been (almost) solved, with provenance identified by some systems, such as GPT-o1. In other words, don't worry; it was largely an artefact of early systems.
I was always sure that these cases would result in this type of ruling, as the basic law of copyright depends on copying, and that is not what is happening here. All freshly minted content is based on past content to a degree and here it is not just a matter of degree (it’s minuscule) but also the methods used. Complex case but right rationale.
I think we're seeing many of the ethical objections to AI fade somewhat. There are still issues, but we're moving past the rhetorical phase of straw men and outrage into detailed analysis and examination. This is an important Rubicon to have crossed. Many so-called 'ethical' issues are just issues that need to be worked through, rather than waved as flags of opposition. We are seeing the resolution of these issues. Time to move on.
Tuesday, November 05, 2024
AI school opens - learners are not good or bad but fast and slow
What was surprising about this initiative was the strong reaction of outrage and dismissal. It is only 20 people at GCSE level, a fascinating experiment, but you’d think it was Armageddon. We have seen a rise in home schooling and school absences post-Covid. Not all are happy with the current schooling for their children, especially those with special educational needs. Why wouldn’t we want some experimentation here, and AI is an obvious place to look?
Learners are not good or bad but fast and slow
The pedagogy is sound for some, perhaps not all. Rather than one-size-fits-all direct instruction, each learner goes at their own pace. Sitting in rows in a classroom, rows in a lecture - that's the model this challenges. The myth is that the traditional model ever did what many claim it does.
Bloom researched this 50 years ago. Not his famous 2 Sigma paper, which over-egged the effect, but the idea of time to competence. He is best known for his ‘taxonomy’, but he never did draw a pyramid, and his taxonomy was radically altered by subsequent researchers, as it was too primitive, rigid and far from representative of how people learn. His more important work in self-paced learning led him to believe, in ‘Human Characteristics and School Learning’, that learners could master knowledge and skills given enough time. It is not that learners are good or bad but fast and slow. This recognises that the pace of learning varies among individuals rather than being a measure of inherent ability.
The artificial constraint of time in timed periods of learning, timetables and fixed-point final exams is a destructive filter on most. The solution was to loosen up on time to democratise learning to suit the many, not the few. Learning is a process, not a timed event. Learning, free from the tyranny of time, allows learners to proceed at their own pace.
Bloom proposed three things could make mastery learning fulfil its potential:
1. Entry diagnosis and adaptation (50%) - diagnose, predict and recommend
2. Address maturation (25%) - personalise and adapt
3. Instructional methods (25%) - match to different types of learning experiences and time
That is what they are doing here. Lesson plans focus on learners rather than the traditional teacher-centric model: assessing prior strengths and weaknesses, personalising to focus more on weaknesses and less on things known or mastered. It’s adaptive, personalised learning. The idea that everyone should learn at exactly the same pace, within the same timescale, is slightly ridiculous, ruled by the need to timetable a one-to-many classroom model.
Learning coaches
There are three learning coaches - that’s one per seven pupils, quite a good staff/pupil ratio compared to almost all schools. They are trained to oversee and encourage, rather than teach directly. That’s fine, as the direct instruction is done online.
Subject matter expertise is outsourced to the technology: AI has a degree in every subject, speaks many languages and can be adjusted to any level. It is this access to any subject that is so compelling. I have written about the realisation of a Universal Teacher before. It is getting ever nearer.
It is also available 24/7, anyplace, the advantages over a strictly timetabled school are obvious. Holidays can also be taken at any time. These are simply practical advantages.
On top of this are the opportunities to make learning more accessible by adjusting the level of the language, along with text-to-speech and speech-to-text (T2S and S2T) and help with dyslexia and other disabilities, at a level way above normal school environments.
Criticisms
One criticism is that this will not develop emotional intelligence, as if single-age groups, sitting 30 or more in a small room, encourage this more than smaller groups. They have learning coaches and are still speaking and interacting with each other. Do we say that working remotely from home has the same effect? Yet that has been normalised. At least these students are together in one place.
There is this idea that the only way to develop critical thinking is sitting in a row in a classroom or lecture theatre. Critical thinking is not some isolated skill taught on its own, it needs domain knowledge and this is what this approach encourages. AI can already critique a claim, debate with you and critique your own work. It will also unpack its own reasoning.
There is also plenty of opportunity for creating safe spaces for discussion and debate. Debate and discussion can be fostered formally and informally in this environment. There is even the possibility of debating online adversaries. The learning coaches deal with behaviour, public speaking and debate.
Costs
At an eye-watering £27,000 a year, it’s a rich person’s game. With 20 start-up pupils, that’s over half a million in revenue straight off the bat. But the cost to the state per pupil is £8,500 in Scotland and £7,200 elsewhere in the UK. One can see economies of scale emerging quickly if it works. But before spitting out the withering criticism, let’s see if it works.
Conclusion
For the first time in the history of our species we have technology that performs some of the tasks of teaching. We have reached a pivot point where this can be tried and tested. My feeling is that we’ll see a lot more of this, as parents and general teachers can delegate a lot of the exposition and teaching of the subject to the technology. We may just see a breakthrough that transforms education.
Monday, November 04, 2024
5 big surprises in Wharton report on GenAI
Enjoyed this report as it was so goddamn honest, contradicting everything the 'consultants' and 'ethics' folk were recommending for the last year and more!
1. Gen AI Strategy is led internally—NOT by consultants
This bucks the trend. The strategy work is not led by the usual suspects or suspect advisors, many of whom have no real experience of building anything in AI. The bandwagon got bogged down by the sheer weight of hucksters. This technology gives so much agency to individuals within organisations, from anyone producing text of any kind to coders, that the sisters and brothers are doing it for themselves. You wonder whether consultancy itself is under real threat from AI.
2. NO stringent policies on use in organisations
Interesting. Seems like a contradiction - a massive rise in use but little sign of policies being used. I suspect that people have seen through the platitudes so often found in these documents: statements of the blinding obvious, over-egging and exaggerating the ethical dangers.
3. Most employees do NOT face heavy restrictions in accessing Gen AI at their companies
The scepticism, regulatory effort and fear-mongering of last year, even from the doomsters, seem to have given way to a more level-headed recognition that this is a technology with real promise, so let's allow folk to use it, as we know they already do! My guess is that this AI on the SLY stuff happened so quickly that organisations just couldn't and didn't know how to respond. Like a lot of tech - it just happens.
4. Companies are adapting by expanding teams and adding Chief AI Officer (CAIO) roles
I wasn’t sure about this, as the first I heard about it was in this report! I suspect this is a US thing, or exaggerated in the sense of just having someone in the organisation who has emerged as the knowledgeable project manager. Can see it happening though.
5. LESS negativity and scepticism
More decision-makers feel ‘pleased’, ‘excited’, and ‘optimistic’, and less ‘amazed’, ‘curious’ and ‘sceptical’. Negative perceptions are softening, as decision-makers see more promise in Gen AI's ability to enhance jobs without replacing employees. This makes sense. A neat study showed that the scepticism tended to come from those who hadn’t used GenAI in anger. Now that adoption has surged, nearly doubling across functional areas in one year, the practical experimentation has shifted sentiment.
We seem to be going through the usual motions of a technological shift, where a period of fierce fear and resistance gives way to the acceptance that it is all right really. The nay-sayers needed to get it out of their systems before use surged and realism prevailed.
https://ai.wharton.upenn.edu/focus-areas/human-technology-interaction/2024-ai-adoption-report/
Wednesday, October 30, 2024
Should L&D be renamed the ‘PERFORMANCE Department’?
L&D is a wagon without a horse, with no pulling power or sense of direction. Could a change of focus provide the direction, momentum and horsepower that has been lacking?
Could we rename Learning & Development as just ‘Performance’, like the Marketing, Legal and Finance Departments? ‘Learning’ puts too much focus on courses. ‘Development’ is too vague. We must tie what we do to measurable outcomes: proven performance, productivity and progress.
‘Performance’ puts the individual into the flow of the organisation. We need to foster a work environment where employees feel valued, motivated and invested in - through their performance, not through a diet of ‘courses’.
A wider perspective seeing learning as performance would link what we do and what needs to be done. The organisation would see results and more sophisticated learning and work solutions related to strategy would emerge.
Learning and performance go hand in glove, allowing us to rise above mere course delivery into more considered informal and incremental learning, and more performance support, linking inputs with outputs.
Focusing on performance can be challenging and needs a clear causal link between learning initiatives and productivity gains but it can be done.
Of course, we don’t just want ‘faster horses’, as Henry Ford said; we need new and different forms of locomotion. The technology of the age is digital and AI. Employees can be more autonomous and self-driven through bottom-up agency.
AI has shown there is an immense thirst among employees for going ahead and doing things faster and better. We use this technology to both learn and get things done, seeing no difference between the two. Learning and doing things faster and better become fused.
In writing something with evidence and import, many use AI to do fast research, write concisely, even critique the proposition. They see AI as a means to an end, the end being solid output.
AI gives us agency to improve ourselves with guidance from the organisation. It fuses learning and productivity, easy to use, fast and motivating. It should be in our domain, our responsibility.
It would also force us to be more data savvy. Forget happy sheets and bums on seats (measures wrong end of learner), let’s get serious with actual evaluative data. AI gives us a data scientist in our pockets – let’s use it.
Of course, AI is just one catalyst for action; there are many others, such as performance support, the wider world of career development, and developing strategic skills for future needs and challenges.
The name matters less than the direction of travel. Strategic alignment with organisational goals should be our new goal. It puts an end to those endless discussions about the future of L&D by asking ‘to what problem are we a solution’? We all know it needs to be more performance focused, more relevant, more integrated with the organisation.
There are many surprising things we can learn from research into video and learning. I have given many talks on the subject showing research on video and memory (the transience effect), on whether learning at x1.5 or x2 speed affects learning, and on whether segmentation, length, perspective, picture quality, audio and so on affect learning. Here are 15 THINGS from the research… some will surprise you!
But is AI generated video as good as real video in learning?
Leiker et al. (2023), in ‘Generative AI for learning’, looked at this hypothesis.
The study took 83 adult learners, randomly assigning them to two groups:
1. Traditionally produced instructor video
2. Video with realistic AI generated character
Pre- and post-learning assessment and survey data were used to determine what was learnt and how learners perceived the two types of video.
NO SIGNIFICANT DIFFERENCES
No significant differences were found in either learning or how the videos were perceived. They suggest that AI-generated synthetic, talking-head learning videos (within limits) are a viable substitute for traditionally produced videos.
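The comparison at the heart of the study can be sketched as a simple two-sample test on post-test scores. The scores and group sizes below are synthetic, for illustration only; they are not the study's data, and the study will have used its own assessments and analysis:

```python
import math
from statistics import mean, variance

# Synthetic post-test scores (illustrative only, NOT the study's data)
instructor_video = [72, 68, 75, 80, 71, 69, 77, 74]  # group 1: traditional video
ai_generated = [70, 73, 74, 78, 69, 72, 76, 71]      # group 2: AI-generated character

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(instructor_video, ai_generated)
# |t| well below ~2 is consistent with "no significant difference"
# at conventional thresholds
print(f"t = {t:.2f}")
```

With similar group means, the statistic comes out close to zero, which is the pattern behind a "no significant differences" finding.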
This doesn't surprise me. I’ve been creating avatars of myself at increasing levels of fidelity in appearance, movement, lip-synch & voice, speaking many languages from Chinese to Zulu. This involved going into a studio for video capture and separate audio studio for voice capture. A range of services are available from Synthesia to Heygen. These avatars can be used as employees in management training, patients in healthcare training and customers in retail training.
SIGNIFICANT DIFFERENCES
Any form of human interaction can use this technique for training: in instructional videos, trigger videos, branched scenario videos and videos with additional AI-generated learning experiences and assessment. In fact, the use of AI can lead to significant UPLIFTS in learning outcomes. In one 2020 trial with a client, before GenAI appeared, AI-enhanced learning resulted in a 61% increase in assessed learning.
INTERACTIVE CHARACTERS
We now have avatars that one can converse with using AI chatbot technology, taking it to another level through scenarios and simulations using real dialogue. We can expect tons of these to appear in computer games (OpenAI have dealings with GTA). But it is in training that they have huge potential. It has been impossible to create high-fidelity simulations for soft skills in the past. I created a lot using fixed video clips for interviewing skills, conflict, language training and so on. They took a lot of time to design, write and produce. These are about to get a lot quicker and cheaper.
CONCLUSION
The use of AI generated video is already here and will continue to evolve. We are not yet at the level of full drama but the direction of travel is clear.